
    Growing a Bayesian Conspiracy Theorist: An Agent-Based Model

    Conspiracy theories cover topics from politicians to world events. Frequently, proponents of conspiracies hold these beliefs strongly despite available evidence that may challenge or disprove them. Conspiratorial reasoning has therefore often been described as illegitimate or flawed. Here, we explore the possibility of growing a rational (Bayesian) conspiracy theorist through an Agent-Based Model. The agent has reasonable constraints on its access to the total information as well as on its access to the global population. The model shows that network structures are central to maintaining objectively mistaken beliefs. Increasing the size of the available network yielded increased confidence in mistaken beliefs and subsequent network pruning, allowing for belief purism. Rather than ameliorating and correcting mistaken beliefs (where agents move toward the correct mean), large networks appear to maintain and strengthen them. As such, large networks may increase the potential for belief polarization, extreme beliefs, and conspiratorial thinking – even amongst Bayesian agents.
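    The updating-and-pruning dynamic described above can be sketched as follows. This is a hypothetical illustration, not the paper's actual model: each agent holds a Beta(a, b) belief about a binary proposition, updates it from neighbours' reports (conjugate Beta-Bernoulli updating), and then prunes neighbours whose beliefs diverge too far from its own ("belief purism"). The pruning threshold is a made-up parameter.

```python
import random

class Agent:
    def __init__(self):
        self.a, self.b = 1.0, 1.0            # uniform Beta prior

    def belief(self):
        return self.a / (self.a + self.b)    # posterior mean

    def observe(self, report):
        if report:                           # Bernoulli observation
            self.a += 1
        else:
            self.b += 1

def step(agents, network, rng, prune_threshold=0.3):
    for i, agent in enumerate(agents):
        for j in network[i]:
            # A neighbour reports True with probability equal to its belief.
            agent.observe(rng.random() < agents[j].belief())
        # Keep only neighbours whose beliefs remain close ("pruning").
        network[i] = {j for j in network[i]
                      if abs(agent.belief() - agents[j].belief()) < prune_threshold}

rng = random.Random(0)
agents = [Agent() for _ in range(20)]
network = {i: set(range(20)) - {i} for i in range(20)}   # fully connected
for _ in range(10):
    step(agents, network, rng)
```

    In a sketch like this, larger networks supply more (correlated) confirming reports per step, so confidence grows faster and pruning locks in the majority view, in line with the paper's qualitative finding.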

    Prolegomena to a Theory and Model of Spoken Persuasion: A Subjective-Probabilistic Interactive Model of Persuasion (SPIMP)

    Various disciplines such as rhetoric, marketing, and psychology have explored persuasion as a social and argumentative phenomenon. The present thesis is predominantly based in cognitive psychology and investigates the psychological processes the persuadee undergoes when faced with a persuasive attempt. The exploration concludes with the development of a concrete model for describing persuasion processing, namely the Subjective-Probabilistic Interactive Model of Persuasion (SPIMP). In addition to cognitive psychology, the thesis relies on conceptual developments and empirical data from disciplines such as rhetoric, economics, and philosophy. The core model of the SPIMP relies on two central persuasive elements: content strength and source credibility. These elements are approached from a subjective perspective in which the persuadee estimates the probabilistic likelihood of how strong the content is and how credible the source is. The elements, however, are embedded in a larger psychological framework such that the subjective estimations are contextual and social rather than solipsistic. The psychological framework relies on internal and external influences, the scope of cognition, and the framework for cognition. The SPIMP departs significantly from previous models of persuasion in a number of ways. For instance, the latter are dual-processing models whereas the SPIMP is an integrated single-process approach. Further, the normative stances differ, since the previous models seemingly rely on a logicist framework whereas the SPIMP relies on a probabilistic one. The development of a new core model of persuasion processing constitutes a novel contribution. Further, the theoretical and psychological framework surrounding the elements of the model provides a novel framework for conceptualising persuasion processing from the perspective of the persuadee. Finally, given the multitude of disciplines connected to persuasion, the thesis provides a definition for use in future studies, which differentiates persuasion from argumentation, communicated information updating, and influence.
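    The content-strength/credibility interplay can be illustrated with a simple Bayesian sketch. This is not SPIMP itself (which is a verbal, theoretical model); the mixture below is just one conventional way of letting perceived source credibility temper the evidential weight of the content:

```python
def posterior(prior, content_lr, credibility):
    """prior: P(claim) before the message.
    content_lr: likelihood ratio the persuadee assigns to the content alone.
    credibility: P(source is reliable); an unreliable source is treated
    as uninformative (likelihood ratio 1)."""
    effective_lr = credibility * content_lr + (1 - credibility) * 1.0
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * effective_lr
    return post_odds / (1 + post_odds)

# A strong argument from a distrusted source moves belief only slightly.
p_trusted = posterior(0.5, 4.0, 0.9)
p_distrusted = posterior(0.5, 4.0, 0.1)
```

    The sketch shows why the two elements are interactive rather than additive: the same content strength yields very different belief change depending on subjective credibility.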

    Strategic advantages of micro-targeted campaigns: Implementing savvy persuaders in a Bayesian Agent-Based Model

    Predicting the effect of persuasion campaigns is difficult, as belief changes may cascade through a network. In recent years, political campaigns have adopted micro-targeting strategies that segment voters into fine-grained clusters and target those clusters more specifically. At present, there is little evidence exploring the efficiency of this method. Through an Agent-Based Model, the current paper provides a novel method for exploring the predicted effects of strategic persuasion campaigns. The voters in the model are rational and revise their beliefs in the propositions expounded by the politicians in accordance with Bayesian belief updating through a source credibility model. The model provides a proof of concept and shows the strategic advantages of micro-targeted campaigning. Despite having only limited voter data, allowing crude segmentation, the micro-targeted campaign consistently beat stochastic campaigns with the same reach. However, given substantially greater reach, a positively perceived stochastic candidate can nullify or beat a strategic persuasion campaign.
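    The targeting advantage is easy to reproduce in a toy simulation. All numbers below (a two-way segmentation, the persuasion probabilities, the reach) are illustrative assumptions, not the paper's parameters:

```python
import random

rng = random.Random(42)
N = 10_000
voters = [rng.choice("AB") for _ in range(N)]   # crude two-segment split

def campaign(voters, reach, targeted, rng):
    """Contact `reach` random voters; a targeted campaign matches its
    message to the voter's segment, a stochastic one sends a random
    message. Assumed rates: matched message persuades with p=0.3,
    mismatched with p=0.1."""
    persuaded = 0
    for i in rng.sample(range(len(voters)), reach):
        message = voters[i] if targeted else rng.choice("AB")
        p = 0.3 if message == voters[i] else 0.1
        persuaded += rng.random() < p
    return persuaded

targeted_wins = campaign(voters, 5000, True, rng)
stochastic_wins = campaign(voters, 5000, False, rng)
```

    With equal reach, the targeted campaign persuades roughly 50% more voters under these assumed rates; raising the stochastic campaign's reach enough closes the gap, mirroring the paper's reach result.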

    Shared decision-making and planning end-of-life care for patients with end-stage kidney disease: a protocol for developing and testing a complex intervention

    Background Internationally, it has been stressed that advance care planning integrated within kidney services can lead to more patients being involved in decisions about end-of-life care. In Denmark, there is no systematic approach to advance care planning and end-of-life care interventions within kidney services. A shared decision-making intervention for planning end-of-life care may support more effective treatment management between patients with end-stage kidney disease, their relatives, and the health professionals. The purpose of this research is to find evidence to design a shared decision-making intervention and test its acceptability to patients with end-stage kidney disease, their relatives, and health professionals in Danish kidney services. Methods This research project will be conducted from November 2020 to November 2023 and is structured according to the UK Medical Research Council framework for complex intervention design and evaluation research. The development phase includes mixed-methods surveys. First, a systematic literature review synthesising primary empirical evidence on patient-involvement interventions for patients with end-stage kidney disease making end-of-life care decisions will be conducted. Second, interviews will be conducted with patients with end-stage kidney disease, relatives, and health professionals to identify experiences of involvement in decision-making and decisional needs when planning end-of-life care. Findings will inform the co-design of the shared decision-making intervention using an iterative process with our multiple-stakeholder steering committee. A pilot test across five kidney units will assess whether the shared decision-making intervention is acceptable and feasible to patients, relatives, and the health professionals providing services to support the delivery of care in kidney services.
    Discussion This research will provide evidence informing the content and design of a shared decision-making intervention supporting patient-professional planning of end-of-life care for patients with end-stage kidney disease, and will assess its acceptability and feasibility when integrated within Danish kidney units. This research is the first step towards innovating the involvement of patients in end-of-life care planning with kidney professionals.

    Contribution of income and job strain to the association between education and cardiovascular disease in 1.6 million Danish employees

    AIMS: We examined the extent to which associations between education and cardiovascular disease (CVD) morbidity and mortality are attributable to income and work stress. METHODS AND RESULTS: We included all employed Danish residents aged 30-59 years in 2000. Cardiovascular disease morbidity analyses included 1 638 270 individuals free of cardiometabolic disease (CVD or diabetes). Mortality analyses included 41 944 individuals with cardiometabolic disease. We assessed education and income annually from population registers and work stress, defined as job strain, with a job-exposure matrix. Outcomes were ascertained until 2014 from health registers, and risk was estimated using Cox regression. During 10 957 399 person-years (men) and 10 776 516 person-years (women), we identified 51 585 and 24 075 incident CVD cases, respectively. For men with low education, the risk of CVD was 1.62 [95% confidence interval (CI) 1.58-1.66] before and 1.46 (95% CI 1.42-1.50) after adjustment for income and job strain (25% reduction). In women, the estimates were 1.66 (95% CI 1.61-1.72) and 1.53 (95% CI 1.47-1.58) (21% reduction). Of the individuals with cardiometabolic disease, 1736 men (362 234 person-years) and 341 women (179 402 person-years) died from CVD. Education predicted CVD mortality in both sexes. Estimates were reduced by 54% (men) and 33% (women) after adjustment for income and job strain. CONCLUSION: Low education predicted incident CVD in initially healthy individuals and CVD mortality in individuals with prevalent cardiometabolic disease. In men with cardiometabolic disease, income and job strain explained half of the higher CVD mortality in the low-education group. In healthy men and in women regardless of cardiometabolic disease, these factors explained 21-33% of the higher CVD morbidity and mortality.
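    The percentage reductions can be checked from the reported hazard ratios, assuming (a common convention) that the reduction is computed on the excess-hazard scale, HR - 1. Small discrepancies arise because the published figures use unrounded estimates:

```python
def excess_hazard_reduction(hr_before, hr_after):
    """Fraction of the excess hazard (HR - 1) removed by adjustment."""
    return (hr_before - hr_after) / (hr_before - 1.0)

men = excess_hazard_reduction(1.62, 1.46)    # ~0.26, reported as 25%
women = excess_hazard_reduction(1.66, 1.53)  # ~0.20, reported as 21%
```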

    Detection of rare variant effects in association studies: extreme values, iterative regression, and a hybrid approach

    We develop statistical methods for detecting rare variants that are associated with quantitative traits. We propose two strategies, and their combination, for this purpose: the iterative regression strategy and the extreme values strategy. In the iterative regression strategy, we use iterative regression on residuals and a multimarker association test to identify a group of significant variants. In the extreme values strategy, we use individuals with extreme trait values to select candidate genes and then test only these candidate genes. These two strategies are integrated into a hybrid approach through a weighting scheme. We apply the proposed methods to analyze the Genetic Analysis Workshop 17 data set. The results show that the hybrid approach is the most powerful. Using the hybrid approach, the average power to detect causal genes for Q1 is about 40%, and the powers to detect FLT1 and KDR are 100% and 68% for Q1, respectively. The powers to detect VNN3 and BCHE are 34% and 30% for Q2, respectively.
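    The extreme values strategy can be sketched as a simple tail-selection step; this is an illustration of the idea, not the authors' implementation, and the quantile cut-offs are arbitrary:

```python
def extreme_subset(trait_values, lower_q=0.1, upper_q=0.9):
    """Return indices of individuals in the lower and upper tails of a
    quantitative trait; candidate genes would then be screened using
    only these individuals, where rare causal alleles are enriched."""
    ranked = sorted(range(len(trait_values)), key=lambda i: trait_values[i])
    n = len(ranked)
    lo = ranked[: int(n * lower_q)]          # lower tail
    hi = ranked[int(n * upper_q):]           # upper tail
    return lo + hi

traits = [0.5, -2.1, 0.1, 1.9, -0.3, 2.4, 0.0, -1.8, 0.2, 0.4]
subset = extreme_subset(traits, 0.2, 0.8)    # indices of the 4 extremes
```

    Restricting the first-stage screen to the tails reduces the multiple-testing burden, which is what makes the second-stage tests on candidate genes more powerful.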

    Detecting functional rare variants by collapsing and incorporating functional annotation in Genetic Analysis Workshop 17 mini-exome data

    Association studies using tag SNPs have been successful in detecting disease-associated common variants. However, common variants, with rare exceptions, explain at most 5–10% of the heritability attributable to genetic factors, which leads to the common disease/rare variants assumption. Indeed, recent studies using sequencing technologies have demonstrated that common diseases can be due to rare variants that could not be systematically studied earlier. Unfortunately, methods for common variants are not optimal if applied to rare variants. To identify rare variants that affect disease risk, several investigators have designed new approaches based on the idea of collapsing different rare variants inside the same genomic block (e.g., the same gene or pathway) to enrich the signal. Here, we consider three different collapsing methods in the multimarker regression model and compare their performance on the Genetic Analysis Workshop 17 data using the consistency of results across different simulations and the cross-validation prediction error rate. The comparison shows that the proportion collapsing method seems to outperform the other two methods and can find both truly associated rare and common variants. Moreover, we explore one way of incorporating functional annotations for the variants in the data, collapsing nonsynonymous and synonymous variants separately to allow for different penalties on them. The incorporation of functional annotations led to higher sensitivity and specificity when the detection results were compared with the answer sheet. The initial analysis was performed without knowledge of the simulating model.
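    One common reading of proportion collapsing scores each individual by the proportion of rare sites in a gene at which they carry a minor allele; the sketch below illustrates that reading with made-up genotypes (it is not necessarily the exact statistic used in the paper):

```python
def proportion_collapse(genotypes):
    """genotypes: per-individual lists of 0/1/2 minor-allele counts at the
    rare sites of one gene. Returns one collapsed score per individual:
    the proportion of rare sites at which the individual carries at
    least one minor allele."""
    scores = []
    for g in genotypes:
        carriers = sum(1 for x in g if x > 0)
        scores.append(carriers / len(g))
    return scores

# 3 individuals, 4 rare sites in one (hypothetical) gene:
geno = [[0, 1, 0, 0],
        [0, 0, 0, 0],
        [1, 0, 2, 1]]
scores = proportion_collapse(geno)   # [0.25, 0.0, 0.75]
```

    The collapsed score then enters the multimarker regression as a single per-gene covariate, which is what enriches the signal from individually untestable rare variants.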

    Selective scattering between Floquet-Bloch and Volkov states in a topological insulator

    The coherent optical manipulation of solids is emerging as a promising way to engineer novel quantum states of matter. The strong time-periodic potential of intense laser light can be used to generate hybrid photon-electron states. Interaction of light with Bloch states leads to Floquet-Bloch states, which are essential in realizing new photo-induced quantum phases. Similarly, dressing of free-electron states near the surface of a solid generates Volkov states, which are used to study non-linear optics in atoms and semiconductors. The interaction of these two dynamic states with each other remains an open experimental problem. Here we use time- and angle-resolved photoemission spectroscopy (Tr-ARPES) to selectively study the transition between these two states on the surface of the topological insulator Bi2Se3. We find that the coupling between the two strongly depends on the electron momentum, providing a route to enhance or inhibit it. Moreover, by controlling the light polarization we can suppress Volkov states in order to generate pure Floquet-Bloch states. This work establishes a systematic path for the coherent manipulation of solids via light-matter interaction.
    Comment: 21 pages, 6 figures; final version to appear in Nature Physics.
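    In sketch form, both dressed-state families are photon sidebands of an undressed dispersion; this is standard Floquet theory, stated here only to orient the reader, not a result of the paper:

```latex
% A band E(k) driven by light of frequency \omega acquires replicas
% (sidebands) displaced by integer photon energies:
%   Floquet-Bloch states: sidebands of the Bloch bands of the solid
%   Volkov states:        sidebands of the free-electron final states
E_n(\mathbf{k}) = E(\mathbf{k}) + n\hbar\omega, \qquad n \in \mathbb{Z}
```

    Because both families of sidebands appear at the same energies, distinguishing and selectively coupling them requires the momentum and polarization control the paper describes.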

    Rare variant collapsing in conjunction with mean log p-value and gradient boosting approaches applied to Genetic Analysis Workshop 17 data

    In addition to methods that can identify common variants associated with susceptibility to common diseases, there has been increasing interest in approaches that can identify rare genetic variants. We use the simulated data provided to the participants of Genetic Analysis Workshop 17 (GAW17) to identify both rare and common single-nucleotide polymorphisms and pathways associated with disease status. We apply a rare variant collapsing approach and the usual association tests for common variants to identify candidates for further analysis using pathway-based and tree-based ensemble approaches. We use the mean log p-value approach to identify a top set of pathways and compare it to the pathways used in the simulation of the GAW17 dataset. We conclude that the mean log p-value approach is able to identify those pathways, as well as related pathways, in its top list. We also use the stochastic gradient boosting approach for the selected subset of single-nucleotide polymorphisms. When the results of this tree-based method are compared with the list of single-nucleotide polymorphisms used in the dataset simulation, we observe a number of false positives in addition to the correct SNPs.
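    The mean log p-value pathway score can be sketched as follows; this illustrates the general idea (rank pathways by the average -log10 p of their member tests), not necessarily the authors' exact statistic, and the p-values are made up:

```python
import math

def mean_log_p(p_values):
    """Score a pathway by the mean of -log10(p) over the p-values of
    the SNPs/genes mapped to it; larger scores rank higher."""
    return sum(-math.log10(p) for p in p_values) / len(p_values)

pathway_a = mean_log_p([1e-6, 1e-4, 0.2])   # lifted by a few strong hits
pathway_b = mean_log_p([0.04, 0.05, 0.06])  # uniformly weak evidence
```

    Averaging on the log scale lets a handful of strongly associated members pull a pathway to the top of the list even when most members are null.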

    Health services research in the public healthcare system in Hong Kong: An analysis of over 1 million antihypertensive prescriptions between 2004-2007 as an example of the potential and pitfalls of using routinely collected electronic patient data

    Objectives Increasing use is being made of routinely collected electronic patient data in health services research. The aim of the present study was to evaluate the potential usefulness of a comprehensive database used routinely in the public healthcare system in Hong Kong, using antihypertensive drug prescriptions in primary care as an example. Methods Data on antihypertensive drug prescriptions were retrieved from the electronic Clinical Management System (e-CMS) of all primary care clinics run by the Health Authority (HA) in the New Territory East (NTE) cluster of Hong Kong between January 2004 and June 2007. Information was also retrieved on patients’ demographic and socioeconomic characteristics, visit type (new or follow-up), and relevant diseases (International Classification of Primary Care, ICPC, codes). Results 1,096,282 visit episodes were accessed, representing 93,450 patients. Patients’ demographic and socio-economic details were recorded in all cases. Prescription details for antihypertensive drugs were missing in only 18 patients (0.02%). However, the ICPC code was missing for 36,409 patients (39%). Significant independent predictors of whether disease codes were applied included patient age > 70 years (OR 2.18), female gender (OR 1.20), district of residence (range of ORs in more rural districts: 0.32-0.41), type of clinic (OR for Family Medicine Specialist Clinics: 1.45), and type of visit (OR for follow-up visit: 2.39). In the 57,041 patients with an ICPC code, uncomplicated hypertension (ICPC K86) was recorded in 45,859 patients (82.1%). The characteristics of these patients were very similar to those of the non-coded group, suggesting that most non-coded patients on antihypertensive drugs are likely to have uncomplicated hypertension. Conclusion The e-CMS database of the HA in Hong Kong varies in quality in terms of recorded information.
    Potential future health services research using demographic and prescription information is highly feasible, but for disease-specific research dependent on ICPC codes some caution is warranted. In the case of uncomplicated hypertension, future research on pharmaco-epidemiology (such as prescription patterns) and clinical issues (such as side-effects of medications on metabolic parameters) seems feasible given the large size of the data set and the comparability of coded and non-coded patients.
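    The reported ORs come from a multivariable model; the sketch below only shows the crude odds-ratio arithmetic on a hypothetical 2×2 table (coded vs. not coded, by follow-up vs. new visit), with made-up counts:

```python
def odds_ratio(exposed_yes, exposed_no, unexposed_yes, unexposed_no):
    """Crude OR from a 2x2 table: odds of the outcome in the exposed
    group divided by the odds in the unexposed group."""
    return (exposed_yes / exposed_no) / (unexposed_yes / unexposed_no)

# Hypothetical counts, not the study's data:
# follow-up visits: 800 coded, 200 not; new visits: 500 coded, 300 not.
or_followup = odds_ratio(800, 200, 500, 300)   # 4.0 / (5/3) = 2.4
```

    An adjusted OR such as the reported 2.39 for follow-up visits is the same quantity estimated while holding the other predictors (age, gender, district, clinic type) constant.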